kafka Too many open files

Question

Has anyone had a similar problem with Kafka? I am getting the error "Too many open files" and I don't know why. Here are some of the logs:

[2018-08-27 10:07:26,268] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180821_1_LOCATION-87/leader-epoch-checkpoint: Too many open files
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:361)
        at java.nio.file.Files.createFile(Files.java:632)
        at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
        at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
        at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
        at kafka.log.Log.<init>(Log.scala:211)
        at kafka.log.Log$.apply(Log.scala:1748)
        at kafka.log.LogManager.loadLog(LogManager.scala:265)
        at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
[2018-08-27 10:07:26,268] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180822_PARSE-136/leader-epoch-checkpoint: Too many open files
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:361)
        at java.nio.file.Files.createFile(Files.java:632)
        at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
        at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
        at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
        at kafka.log.Log.<init>(Log.scala:211)
        at kafka.log.Log$.apply(Log.scala:1748)
        at kafka.log.LogManager.loadLog(LogManager.scala:265)
        at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
[2018-08-27 10:07:26,269] ERROR Error while deleting the clean shutdown file in dir /home/weihu/kafka/kafka/logs (kafka.server.LogD)
java.nio.file.FileSystemException: /home/weihu/kafka/kafka/logs/BC_20180813_1_STATISTICS-402/leader-epoch-checkpoint: Too many open files
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.Files.newByteChannel(Files.java:361)
        at java.nio.file.Files.createFile(Files.java:632)
        at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
        at kafka.server.checkpoints.LeaderEpochCheckpointFile.<init>(LeaderEpochCheckpointFile.scala:62)
        at kafka.log.Log.initializeLeaderEpochCache(Log.scala:278)
        at kafka.log.Log.<init>(Log.scala:211)
        at kafka.log.Log$.apply(Log.scala:1748)
        at kafka.log.LogManager.loadLog(LogManager.scala:265)
        at kafka.log.LogManager.$anonfun$loadLogs$12(LogManager.scala:335)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Answers

In Kafka, every topic is (optionally) split into many partitions. For each partition, the broker maintains several files (for the index and for the actual data).

kafka-topics --zookeeper localhost:2181 --describe --topic topic_name

will give you the number of partitions for topic topic_name. The default number of partitions per topic, num.partitions, is defined in /etc/kafka/server.properties.
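If you want a rough count of all partitions in the cluster (and therefore a lower bound on how many files each broker may have to keep open), you can count the partition lines in the describe output. This is only a sketch: it assumes the same ZooKeeper address as above and relies on the "Partition:" label that kafka-topics prints for each partition.

# Rough count of all partitions across all topics:
kafka-topics --zookeeper localhost:2181 --describe | grep -c "Partition:"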

The total number of open files can become very large if the broker hosts many partitions and a particular partition has many log segment files.
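To see which partition directories contribute the most files, you can count the entries in each directory under the broker's log directory. A minimal sketch, assuming the log.dirs path /home/weihu/kafka/kafka/logs from the stack traces above:

# List the ten partition directories containing the most files
# (log segments, indexes, checkpoints); adjust the path to your log.dirs.
for d in /home/weihu/kafka/kafka/logs/*/ ; do
    printf '%s %s\n' "$(ls "$d" | wc -l)" "$d"
done | sort -rn | head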

You can see the current file descriptor limit by running

ulimit -n
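Note that ulimit -n only shows the limit of your current shell; the broker process may have been started with a different one. You can check the limit that actually applies to the running broker, for example as follows (this assumes a single broker process whose command line contains the main class kafka.Kafka):

# Show the "Max open files" soft/hard limits of the running broker process.
cat /proc/$(pgrep -f kafka.Kafka)/limits | grep "Max open files"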

You can also check the number of open files using lsof:

lsof | wc -l
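Keep in mind that lsof | wc -l counts entries for every process on the machine, including memory-mapped files, so it can overstate the broker's descriptor usage. To count only the descriptors held by the broker itself, you can look at its /proc entry (again assuming a single process matched by kafka.Kafka):

# Number of file descriptors currently open by the Kafka broker.
ls /proc/$(pgrep -f kafka.Kafka)/fd | wc -l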

To solve the issue, you either need to raise the limit on open file descriptors:

ulimit -n <new_limit>

or reduce the number of open files (for example, by reducing the number of partitions per topic).
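Raising the limit with ulimit only affects the current shell and whatever is started from it. Below is a minimal sketch of making a higher limit persistent, assuming the broker runs as a user named kafka and, optionally, under a systemd unit named kafka; the value 100000 is only an example, not a recommendation.

# PAM-based login shells: add a nofile limit for the kafka user
# (takes effect on the next login / service restart).
echo 'kafka - nofile 100000' | sudo tee -a /etc/security/limits.conf

# systemd-managed brokers ignore limits.conf; set LimitNOFILE on the unit:
sudo mkdir -p /etc/systemd/system/kafka.service.d
printf '[Service]\nLimitNOFILE=100000\n' | \
    sudo tee /etc/systemd/system/kafka.service.d/limits.conf
sudo systemctl daemon-reload
sudo systemctl restart kafka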
